

Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering

Neural Information Processing Systems

Code will be made public upon publication. Image feature extractor: time-specific pixel-aligned features in R^{C×d} are obtained from the previously constructed time-augmented skeletal features s_{1:C,t} ∈ R^{L×C×d}. The SparseConvNet applies 3D sparse convolutions to process the input volume, diffusing the skeletal features into the nearby 3D space. The cross-attention between the sampled time-augmented skeletal features and the time-specific pixel-aligned features is illustrated in Fig. 2. We discuss additional details about the datasets used, including the train/test splits and license information. C.1 ZJU-MoCap: We use the 512×512 videos for training and testing, following the original Neural Body [7].
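The fusion step described above attends skeletal-feature queries over pixel-aligned key/value features. A minimal sketch of scaled dot-product cross-attention follows; the shapes (24 queries, 100 key/value pairs, dimension 64) and the NumPy implementation are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries attend over key/value pairs."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)           # (Nq, Nk) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax over keys
    return weights @ values                          # (Nq, d) fused features

# Hypothetical shapes: 24 skeletal-feature queries of dim 64 attending
# over 100 pixel-aligned features of the same dim.
rng = np.random.default_rng(0)
skel = rng.normal(size=(24, 64))    # time-augmented skeletal features (queries)
pix = rng.normal(size=(100, 64))    # time-specific pixel-aligned features (keys/values)
fused = cross_attention(skel, pix, pix)
print(fused.shape)  # (24, 64)
```

Each fused row is a convex combination of the pixel-aligned features, weighted by similarity to the corresponding skeletal query.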



Anticipatory Fall Detection in Humans with Hybrid Directed Graph Neural Networks and Long Short-Term Memory

Cho, Younggeol, Solak, Gokhan, Nocentini, Olivia, Lorenzini, Marta, Fortuna, Andrea, Ajoudani, Arash

arXiv.org Artificial Intelligence

Detecting and preventing falls in humans is a critical component of assistive robotic systems. While significant progress has been made in detecting falls, the prediction of falls before they occur, and the analysis of the transient state between stability and an impending fall, remain largely unexplored. In this paper, we propose an anticipatory fall detection method that utilizes a hybrid model combining Dynamic Graph Neural Networks (DGNN) with Long Short-Term Memory (LSTM) networks, decoupling the motion-prediction and gait-classification tasks to anticipate falls with high accuracy. Our approach employs real-time skeletal features extracted from video sequences as input to the proposed model. The DGNN acts as a classifier, distinguishing between three gait states: stable, transient, and fall. The LSTM-based network then predicts human movement in subsequent time steps, enabling early detection of falls. The proposed model was trained and validated on the OUMVLP-Pose and URFD datasets, demonstrating superior performance in prediction error and recognition accuracy compared to models relying solely on a DGNN and to models from the literature. The results indicate that decoupling prediction and classification improves performance compared to addressing the unified problem with the DGNN alone. Furthermore, our method allows for monitoring of the transient state, offering valuable insights that could enhance the functionality of advanced assistance systems.
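The decoupled pipeline the abstract describes can be sketched as: a motion predictor rolls the skeleton forward in time, and a gait classifier labels each predicted pose, so a fall can be flagged before it happens. Everything below is a hypothetical stand-in — the linear extrapolator replaces the LSTM, the threshold rule replaces the DGNN, and the 2-D "hip height" poses are toy data:

```python
import numpy as np

STATES = ["stable", "transient", "fall"]

def predict_motion(history):
    """Stand-in for the LSTM motion predictor: extrapolate the next pose
    linearly from the last two frames (a hypothetical placeholder)."""
    return history[-1] + (history[-1] - history[-2])

def classify_gait(pose, fall_threshold=0.5, transient_threshold=0.8):
    """Stand-in for the DGNN gait classifier: label a pose by a
    hypothetical 'hip height' coordinate (pose[1])."""
    if pose[1] < fall_threshold:
        return "fall"
    if pose[1] < transient_threshold:
        return "transient"
    return "stable"

def anticipate_fall(history, horizon=5):
    """Decoupled pipeline: roll the predictor forward up to `horizon` steps,
    classify each predicted pose, and report the first anticipated fall."""
    frames = list(history)
    for step in range(1, horizon + 1):
        nxt = predict_motion(np.asarray(frames))
        frames.append(nxt)
        if classify_gait(nxt) == "fall":
            return step  # fall anticipated this many steps ahead
    return None  # no fall within the horizon

# Toy skeletal trajectory: hip height steadily dropping.
history = np.array([[0.0, 1.0], [0.0, 0.9], [0.0, 0.8]])
print(anticipate_fall(history))  # → 4
```

Because prediction and classification are separate modules, the classifier can also report the transient state on predicted frames, which is what enables the transient-state monitoring the paper highlights.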